114 research outputs found

    Introduction to the Special Section on Reputation in Agent Societies

    This special section includes papers from the 'Reputation in Agent Societies' workshop held as part of the 2004 IEEE/WIC/ACM International Joint Conference on Intelligent Agent Technology (IAT'04) and Web Intelligence (WI'04), September 20, 2004, in Beijing, China. The purpose of this workshop was to promote multidisciplinary collaboration on the modeling and implementation of reputation systems. Reputation is increasingly at the centre of attention in many fields of science and domains of application, including economics, organisation science, policy-making, (e-)governance, cultural evolution, social dilemmas, socio-dynamics, innofusion, etc. However, the result of all this attention is a great number of ad hoc models and little integration of instruments for the implementation, management and optimisation of reputation. On the one hand, entrepreneurs and administrators manage corporate and firm reputation without contributing to or accessing a solid, general and integrated body of scientific knowledge on the subject matter. On the other hand, software designers believe they can design and implement online reputation reporting systems without investigating the properties, requirements and dynamics of reputation in natural societies, or why it evolved. We promoted the workshop and this special section in the hope of taking the first steps towards a new, cross-disciplinary approach to reputation, one that accounts for the social cognitive mechanisms and processes that support it and works towards a consensus on essential guidelines for designing or shaping reputation technologies.
    Keywords: Reputation, Agent Systems

    Repage: REPutation and ImAGE Among Limited Autonomous Partners

    This paper introduces Repage, a computational system that adopts a cognitive theory of reputation. We propose a fundamental distinction between image and reputation, which suggests a way out of the paradox of sociality, i.e. the trade-off between agents' autonomy and their need to adapt to the social environment. On the one hand, agents are autonomous if they select partners based on their own social evaluations (images). On the other, they need to update those evaluations by taking into account the evaluations of others. Hence, social evaluations must circulate and be represented as "reported evaluations" (reputation) before, and in order for, agents to decide whether to accept them. Representing this level of cognitive detail in the design of artificial agents requires a specialised subsystem, which we are in the course of developing for the public domain. In the paper, after a short presentation of the cognitive theory of reputation and its motivations, we describe the implementation of Repage.
    Keywords: Reputation, Agent Systems, Cognitive Design, Fuzzy Evaluation
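
    A minimal sketch of the image/reputation distinction the abstract describes: circulating "reported evaluations" are stored separately from the agent's own images, and are only folded into an image through an explicit acceptance decision. All names and the averaging rule below are illustrative assumptions, not the actual Repage implementation.

```python
# Hypothetical sketch, not the real Repage API: images (own evaluations)
# are kept apart from reputations (reported evaluations heard from others).
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    target: str   # the evaluated agent
    value: float  # assumed goodness score in [0, 1]

@dataclass
class SocialMind:
    images: dict = field(default_factory=dict)       # own evaluations
    reputations: dict = field(default_factory=dict)  # reports, held without endorsement

    def hear_reputation(self, source: str, ev: Evaluation) -> None:
        # Record what circulates without adopting it as one's own belief.
        self.reputations.setdefault(ev.target, []).append((source, ev))

    def accept(self, target: str, weight: float = 0.5) -> None:
        # Autonomous decision: fold reported evaluations into one's own image.
        reports = [ev.value for _, ev in self.reputations.get(target, [])]
        if reports:
            reported = sum(reports) / len(reports)
            own = self.images.get(target, reported)
            self.images[target] = (1 - weight) * own + weight * reported
```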

    Propagation of opinions in Structural Graphs

    Trust and reputation measures are crucial in distributed open systems where agents need to decide whom or what to choose. Existing work has mainly focused on the reputation of single entities, neglecting their position amongst others and its effect on the propagation of trust. This paper presents an algorithm for the propagation of reputation in structural graphs. It allows agents to infer their opinion about unfamiliar entities based on their view of related entities. The proposed mechanism focuses on the "part of" relation to illustrate how reputation may flow (or propagate) from one entity to another. The paper bases its reputation measures on opinions, which it defines as probability distributions over an evaluation space, providing a richer representation of opinions.
    This work has been supported by both the LiquidPublications project (http://project.LiquidPub.org/), funded by the FET programme under the European Commission's FET-Open grant number 213360, and the Agreement Technologies project (http://www.agreementtechnologies.org/), funded by CONSOLIDER CSD 2007-0022, INGENIO 2010.
    Peer Reviewed
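
    To make the idea concrete, here is a toy sketch of propagating opinions, represented as probability distributions over a discrete evaluation space, up a "part of" hierarchy. The evaluation space, the point-wise mixing rule and all names are assumptions for illustration; the paper's actual propagation algorithm is not reproduced here.

```python
# Illustrative only: infer an opinion about a whole from opinions on its parts.
EVAL_SPACE = ["bad", "neutral", "good"]  # hypothetical evaluation space

def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def infer_opinion(entity, opinions, part_of):
    """Infer an opinion about `entity` from opinions on its parts."""
    if entity in opinions:                      # direct opinion available
        return opinions[entity]
    parts = [p for p, whole in part_of if whole == entity]
    known = [opinions[p] for p in parts if p in opinions]
    if not known:
        # No evidence: fall back to a uniform, maximally uncertain opinion.
        return {v: 1.0 / len(EVAL_SPACE) for v in EVAL_SPACE}
    # Mix the parts' distributions point-wise and renormalize.
    mixed = {v: sum(d.get(v, 0.0) for d in known) for v in EVAL_SPACE}
    return normalize(mixed)

# Example: judge a journal from opinions about two of its papers.
part_of = [("paper1", "journal"), ("paper2", "journal")]
opinions = {"paper1": {"bad": 0.1, "neutral": 0.2, "good": 0.7},
            "paper2": {"bad": 0.3, "neutral": 0.4, "good": 0.3}}
print(infer_opinion("journal", opinions, part_of))
```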

    Topology and Memory Effect on Convention Emergence

    Abstract—Social conventions are useful self-sustaining protocols that let groups coordinate behavior without a centralized entity enforcing coordination. We perform an in-depth study of different network structures, to compare and evaluate the effects of different network topologies on the success and rate of emergence of social conventions. While others have investigated memory for learning algorithms, the effects of memory, or the history of past activities, on the reward received by interacting agents have not been adequately investigated. We propose a reward metric that takes into consideration the past action choices of the interacting agents. The research question to be answered is what effect the history-based reward function and the learning approach have on convergence time to conventions in different topologies. We experimentally investigate the effects of history size, agent population size and neighborhood size on the emergence of social conventions.
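
    A minimal sketch of what a history-based reward metric of this kind could look like: the instantaneous coordination payoff is modulated by how often the two agents' recent action choices agreed. The window size, payoff values and scaling rule are assumptions for illustration, not the paper's actual metric.

```python
# Hypothetical history-sensitive reward for a pairwise coordination game.
from collections import deque

HISTORY = 5  # assumed memory size

def history_reward(my_action, your_action, my_history, your_history):
    base = 1.0 if my_action == your_action else -1.0  # plain coordination payoff
    if my_history and your_history:
        matches = sum(a == b for a, b in zip(my_history, your_history))
        consistency = matches / min(len(my_history), len(your_history))
    else:
        consistency = 0.0
    return base * (1.0 + consistency)  # past agreement amplifies the signal

my_hist, your_hist = deque(maxlen=HISTORY), deque(maxlen=HISTORY)
for my_a, your_a in [("L", "L"), ("L", "R"), ("L", "L")]:
    r = history_reward(my_a, your_a, my_hist, your_hist)
    my_hist.append(my_a)
    your_hist.append(your_a)
```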

    Trust and reputation for agent societies

    Available from TDX. Title taken from the digitized cover. See jsmresum1de1.pd

    CHARMS: A Charter Management System. Automating the Integration of Electronic Institutions and Humans

    The execution of process models is usually presented through a graphical user interface, especially when users' input is required. Existing mechanisms, such as Electronic Institutions (EIs), provide means to easily specify and automatically execute process models. However, every time the specification is modified, the graphical user interface (GUI) needed during the execution stage must be manually modified accordingly. This paper proposes a system that helps maintain such GUIs in an efficient and automated manner. We present and test Charms, a system built on top of EIs that allows the automatic creation and update of GUIs based on the provided process model specification. © 2012 Copyright Taylor and Francis Group, LLC.
    This work has been supported by: the LiquidPublications project (project.LiquidPub.org), funded by the Seventh Framework Programme for Research of the European Commission under FET-Open grant number 213360; the Agreement Technologies project (www.agreement-technologies.org), funded by CONSOLIDER CSD 2007-0022, INGENIO 2010; and the CBIT project on community-building information technology, funded by the Spanish Ministry of Science and Innovation under grant number TIN2010-16306.
    Peer Reviewed
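
    A toy illustration of the underlying idea, deriving input widgets directly from a declarative process-model specification so that changing the specification changes the GUI; this is a sketch under assumed names, not the actual Charms code or EI specification format.

```python
# Hypothetical fragment of a process-model specification: each action
# lists its input fields and their types.
SPEC = {
    "register_bid": [("bidder", "string"), ("amount", "float")],
}

WIDGETS = {"string": "TextField", "float": "NumberField"}  # assumed mapping

def build_form(action, spec=SPEC):
    """Derive the input form for an action from the specification,
    so editing SPEC automatically updates the rendered GUI."""
    return [f"{WIDGETS[field_type]}(label={name!r})"
            for name, field_type in spec[action]]

print(build_form("register_bid"))
```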

    Reputation-based decisions for logic-based cognitive agents

    Computational trust and reputation models have been recognized as one of the key technologies required to design and implement agent systems. These models manage and aggregate the information agents need to efficiently perform partner selection in uncertain situations. For simple applications, a game-theoretical approach similar to that used in most models can suffice. However, if we want to tackle the problems found in socially complex virtual societies, we need more sophisticated trust and reputation systems. In this context, the reputation-based decisions that agents make take on special relevance and can be as important as the reputation model itself. In this paper, we propose a possible integration of a cognitive reputation model, Repage, into a cognitive BDI agent. First, we specify a belief logic capable of capturing the semantics of Repage information, which encodes probabilities. This logic is defined by means of a hierarchy of two first-order languages, allowing the specification of axioms as first-order theories. The belief logic integrates the information coming from Repage in terms of image and reputation, and combines them, defining a typology of agents depending on this combination. We use this logic to build a complete graded BDI model, specified as a multi-context system in which beliefs, desires, intentions and plans interact with each other to perform BDI reasoning. We conclude the paper with an example and a related-work section that compares our approach with current state-of-the-art models. © 2010 The Author(s).
    This work was supported by the projects AEI (TIN2006-15662-C02-01), AT (CONSOLIDER CSD2007-0022, INGENIO 2010), LiquidPub (STREP FP7-213360), RepBDI (Intramural 200850I136) and by the Generalitat de Catalunya under the grant 2005-SGR-00093.
    Peer Reviewed
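
    As a rough numerical intuition for the combination step (not the paper's belief logic, which is a full first-order hierarchy): an image and a reputation about the same partner can be collapsed into one graded belief, with the mixing weight defining a simple agent typology. Type names, weights and the linear rule are illustrative assumptions.

```python
# Illustrative sketch only: one graded belief from image + reputation.
AGENT_TYPES = {       # hypothetical typology by trust in reported evaluations
    "image_only": 0.0,
    "balanced": 0.5,
    "credulous": 1.0,
}

def graded_belief(p_image: float, p_reputation: float,
                  agent_type: str = "balanced") -> float:
    """Degree of belief that the partner will perform well."""
    w = AGENT_TYPES[agent_type]
    return (1 - w) * p_image + w * p_reputation

# Own image says 0.8, circulating reputation says 0.4:
print(graded_belief(0.8, 0.4, "balanced"))  # ~0.6: graded, not all-or-nothing
```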

    Punish and Voice: Punishment Enhances Cooperation when Combined with Norm-Signalling

    Material punishment has been suggested to play a key role in sustaining human cooperation. Experimental findings, however, show that inflicting mere material costs does not always increase cooperation and may even have detrimental effects. Indeed, ethnographic evidence suggests that the most typical punishing strategies in human ecologies (e.g., gossip, derision, blame and criticism) naturally combine normative information with material punishment. Using laboratory experiments with humans, we show that the interaction of norm communication and material punishment leads to higher and more stable cooperation, at a lower cost for the group, than when either is used separately. In this work, we argue and provide experimental evidence that successful human cooperation is the outcome of the interaction between instrumental decision-making and the norm psychology humans are endowed with. Norm psychology is a cognitive machinery for detecting and reasoning upon norms, characterized by a salience mechanism devoted to tracking how prominent a norm is within a group. We test our hypothesis both in the laboratory and with an agent-based model. The agent-based model incorporates fundamental aspects of norm psychology absent from previous work. The combination of these methods allows us to provide an explanation for the proximate mechanisms behind the observed cooperative behaviour. The consistency between the two sources of data supports our hypothesis that cooperation is a product of norm psychology solicited by norm-signalling and coercive devices. © 2013 Andrighetto et al.
    The work presented in this paper has been performed in the frame of the following projects: 1. MacNorms (Intramurales de frontera CSIC – PIF08-007); 2. GLODERS (Grant: Gloders 315874; FP7/2007-2013); 3. The Spanish Ministerio de Economía y Competitividad (Grant: ECO2011-29847-C02-01); 4. The Generalitat de Catalunya (Grant: 2009 SGR 820 and Grant 2009SGR1434); and 5. The Antoni Serra Ramoneda Research Chair (UAB-CatalunyaCaixa).
    Peer Reviewed
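
    A minimal sketch of a salience mechanism of the kind the abstract describes: an agent nudges its estimate of a norm's prominence up or down according to observed social cues, with norm-signalling cues (blame, criticism naming the norm) weighing more than bare material cost. The cue set and weights are assumptions, not the calibrated values of the paper's agent-based model.

```python
# Hypothetical norm-salience update from observed social cues.
CUE_WEIGHTS = {
    "compliance_observed": +0.10,
    "violation_observed": -0.15,
    "punishment_observed": +0.20,
    "norm_invocation_heard": +0.30,  # e.g. blame or criticism naming the norm
}

def update_salience(salience, cues):
    """Nudge norm salience, clipped to [0, 1], according to observed cues."""
    for cue in cues:
        salience += CUE_WEIGHTS[cue]
    return min(1.0, max(0.0, salience))

s = 0.5
# Punishment combined with norm-signalling raises salience more than
# punishment alone, mirroring the paper's central hypothesis.
s = update_salience(s, ["punishment_observed", "norm_invocation_heard"])
```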

    Counter-punishment, norm-communication and accountability

    Presented as a communication at the HEIDI-CORTEX Behavioral Economics Workshop GATE, held on 29-30 October 2014 in Ecully (France), and as a communication on 17 April 2015 at the Second International Meeting on Experimental and Behavioral Social Sciences, held 15-17 April 2015 in Toulouse (France).
    We study whether communication can limit the negative consequences of the use of counter-punishment in a public goods environment. The two dimensions of communication we study are norm communication and accountability, i.e. having to justify one's actions to others. We conduct four experimental treatments, all involving a contribution stage, a punishment stage and a counter-punishment stage. In the first treatment there are no communication possibilities. The second treatment allows for communication at the punishment stage, and the third asks for a justification message at the counter-punishment stage. The fourth combines the two communication channels of the second and third treatments. We find that the three treatments involving communication at either of the two relevant stages lead to significantly higher contributions than the baseline treatment. The detrimental effect of allowing for counter-punishment is neutralized in the presence of communication possibilities. We find no difference between the three treatments with communication: separately, norm communication and being held accountable work equally well, and we find no interaction effect from using them jointly. We also relate our results to those of other treatments without counter-punishment opportunities. The overall pattern of results shows that the key factor is the presence of communication: whenever it is possible, contributions are higher than when it is not, regardless of counter-punishment opportunities.
    The work presented in this paper has been performed in the frame of the MacNorms project (Intramurales de frontera CSIC – PIF08-007) and of the GLODERS project (Seventh Framework EU Programme FP7/2007-2013 Grant: 315874). The authors thank the Institute of Cognitive Sciences and Technologies (ISTC-CNR, Rome), the European University Institute, the Spanish Ministerio de Economía y Competitividad (Grant: ECO2011-29847-C02-01) and the Generalitat de Catalunya (Grant: 2009 SGR 820 and Grant 2009SGR1434) for research support.
    Peer Reviewed
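
    For readers unfamiliar with the game structure, here is a minimal sketch of the payoff accounting in a public goods game with punishment and counter-punishment stages. The endowment, multiplier and punishment technology below are assumed for illustration, not the experiment's actual parameters.

```python
# Illustrative public goods payoffs with punishment and counter-punishment.
ENDOWMENT, MULTIPLIER = 20, 1.6
COST_TO_PUNISH, PUNISHMENT_IMPACT = 1, 3  # assumed 1:3 punishment technology

def payoffs(contributions, punish, counter_punish):
    """contributions: list of ints; punish / counter_punish: matrices where
    entry [i][j] is the points player i assigns to player j at that stage."""
    n = len(contributions)
    share = MULTIPLIER * sum(contributions) / n
    result = []
    for i in range(n):
        p = ENDOWMENT - contributions[i] + share          # contribution stage
        p -= COST_TO_PUNISH * (sum(punish[i]) + sum(counter_punish[i]))
        p -= PUNISHMENT_IMPACT * sum(punish[j][i] + counter_punish[j][i]
                                     for j in range(n))   # points received
        result.append(p)
    return result
```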

    On the integration of trust with negotiation, argumentation and semantics

    Agreement Technologies are needed for autonomous agents to come to mutually acceptable agreements, typically on behalf of humans. These technologies include trust computing, negotiation, argumentation and semantic alignment. In this paper, we identify a number of open questions regarding the integration of computational models and tools for trust computing with negotiation, argumentation and semantic alignment. We consider these questions both in general and in the context of applications in open, distributed settings such as the grid and cloud computing. © 2013 Cambridge University Press.
    This work was partially supported by the Agreement Technology COST action (IC0801). The authors would like to thank all participants in the panel on "Trust, Argumentation and Semantics", held on 16 December 2009 in Agia Napa, Cyprus, for helpful discussions and comments.
    Peer Reviewed